Retrieval and Augmentation of Domain Knowledge for Text-to-SQL Semantic Parsing

Patwardhan, Manasi, Agarwal, Ayush, Bhaisaheb, Shabbirhussain, Arora, Aseem, Vig, Lovekesh, Sarawagi, Sunita

arXiv.org Artificial Intelligence

Abstract--The performance of Large Language Models (LLMs) at translating Natural Language (NL) queries into SQL varies significantly across databases (DBs). NL queries are often expressed using a domain-specific vocabulary, and mapping them to the correct SQL requires an understanding of the embedded domain expressions and their relationship to the DB schema structure. Existing benchmarks rely on unrealistic, ad-hoc, query-specific textual hints for expressing domain knowledge. In this paper, we propose a systematic framework for associating structured domain statements at the database level. We present a method for retrieving the structured domain statements relevant to a user query using sub-string-level matching. We evaluate on eleven realistic DB schemas covering diverse domains, across five open-source and proprietary LLMs, and demonstrate that (1) DB-level structured domain statements are more practical and accurate than existing ad-hoc, query-specific textual domain statements, and (2) our sub-string-match-based retrieval of relevant domain statements yields significantly higher accuracy than other retrieval approaches. The impressive natural language understanding and code generation capabilities of modern LLMs have led to significantly improved performance on NL-to-SQL semantic parsing [1], [2]. However, their accuracy varies widely with the database queried [3]. DBs in WikiSQL [4] or Spider [5] contain semantically meaningful table/column names and cell values, making it easier for LLMs to accurately link domain expressions in the NL query to DB schema/cell elements.
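The retrieval idea in the abstract can be sketched as follows. This is a minimal illustration of sub-string-match retrieval, assuming domain statements are stored as (trigger phrase, statement) pairs at the DB level; the statement format and example data are assumptions for illustration, not taken from the paper.

```python
# Sketch: retrieve DB-level domain statements whose trigger phrase
# occurs as a case-insensitive sub-string of the NL query.
# The statement format and sample data below are illustrative assumptions.

def retrieve_domain_statements(query, domain_statements):
    """Return statements whose phrase is a sub-string of the query."""
    q = query.lower()
    return [s["statement"] for s in domain_statements
            if s["phrase"].lower() in q]

# Hypothetical domain statements mapping domain vocabulary to schema logic.
domain_statements = [
    {"phrase": "active patients",
     "statement": "active patients: patients.status = 'A'"},
    {"phrase": "net revenue",
     "statement": "net revenue: SUM(orders.amount - orders.refund)"},
]

hits = retrieve_domain_statements(
    "List all active patients admitted in 2021", domain_statements)
# Only the matching statement is retrieved and would be added to the
# LLM prompt alongside the schema.
```

Only the retrieved statements are passed to the LLM, keeping the prompt focused on the domain knowledge the query actually uses.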


Exploring Chain-of-Thought Style Prompting for Text-to-SQL

Tai, Chang-You, Chen, Ziru, Zhang, Tianshu, Deng, Xiang, Sun, Huan

arXiv.org Artificial Intelligence

In-context learning with large language models (LLMs) has recently attracted increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial ability for LLMs to improve on text-to-SQL parsing is multi-step reasoning. Thus, we systematically study how to enhance LLMs' reasoning ability through chain-of-thought (CoT) style prompting, including the original chain-of-thought prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023). Our experiments demonstrate that iterative prompting as in Zhou et al. (2023) may be unnecessary for text-to-SQL parsing, and that using detailed reasoning steps tends to cause more error propagation. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps, and 2.4 and 1.5 point absolute gains compared to the least-to-most prompting method.
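To make the contrast with standard prompting concrete, here is a minimal sketch of assembling a CoT-style text-to-SQL prompt from a schema and a question. The template wording and the example schema are assumptions for illustration; the paper's actual prompt design differs.

```python
# Sketch: build a CoT-style prompt for text-to-SQL in a single pass
# (no iterative prompting). The template text is an illustrative
# assumption, not the paper's actual prompt.

def build_cot_prompt(schema, question):
    """Concatenate schema, question, and a step-by-step instruction."""
    return (
        f"Database schema:\n{schema}\n\n"
        f"Question: {question}\n"
        "Let's think step by step, then write the final SQL query.\n"
    )

prompt = build_cot_prompt(
    "singer(singer_id, name, country)",
    "How many singers are from France?")
# The prompt (plus few-shot exemplars with reasoning steps) would be
# sent to the LLM; standard prompting omits the reasoning instruction.
```

In contrast to least-to-most prompting, which first decomposes the question and then answers sub-questions over multiple LLM calls, this single-call shape reflects the abstract's finding that iterative prompting may be unnecessary for text-to-SQL.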